Learning to Identify and Track Imaginary Objects Implied by Gestures

Authors

  • Andreya Piplica
  • Alexandra Olivier
  • Allison Petrosino
  • Kevin Gold
Abstract

A vision-based machine learner is presented that learns characteristic hand and object movement patterns for using certain objects, and uses this information to recreate the "imagined" object when the gesture is performed without the object. To classify the gestures/objects, Hidden Markov Models (HMMs) are trained on the moment-to-moment velocity and shape of the object-manipulating hand. Object identification using the Forward-Backward algorithm achieved 89% accuracy when deciding between 6 objects. Two methods for rotating and positioning imaginary objects in the frame were compared. One used a modified HMM with mixtures of von Mises distributions to smooth the observed rotation of the hand. The other used least squares regression to determine the object rotation as a function of hand location, and provided more accurate rotational positioning. The method was adapted to real-time classification from a low-fps webcam stream and still succeeds when the testing frame rate is much lower than the training frame rate.
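
As a rough sketch of the pipeline the abstract describes (not the authors' code), the example below trains one HMM per object class on per-frame hand features and classifies a new gesture by likelihood; the use of hmmlearn with Gaussian emissions, the feature layout, and all function names are assumptions. The last two helpers mirror the second positioning method, a least-squares fit of object rotation against hand location.

```python
# Minimal sketch (not the paper's implementation): per-class HMMs over hand
# features, classification by log-likelihood, and a least-squares rotation fit.
import numpy as np
from hmmlearn import hmm  # assumed dependency; Gaussian emissions are an assumption


def train_object_hmms(sequences_by_object, n_states=5):
    """sequences_by_object: {object_name: [T_i x D feature arrays]}, where each
    row holds per-frame hand velocity and shape features (assumed layout)."""
    models = {}
    for obj, seqs in sequences_by_object.items():
        X = np.vstack(seqs)                # concatenate all training sequences
        lengths = [len(s) for s in seqs]   # hmmlearn needs per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[obj] = m
    return models


def classify_gesture(models, seq):
    """Score an unlabeled T x D feature sequence under each object's HMM
    (hmmlearn's score() computes the log-likelihood via the forward pass)
    and return the best-scoring object."""
    return max(models, key=lambda obj: models[obj].score(seq))


def fit_rotation_regressor(hand_xy, object_angle):
    """Least-squares fit of object rotation as a linear function of hand
    location, [x, y, 1] @ w ~= angle (a simplification of the paper's method)."""
    A = np.hstack([hand_xy, np.ones((len(hand_xy), 1))])
    w, *_ = np.linalg.lstsq(A, object_angle, rcond=None)
    return w


def predict_rotation(w, hand_xy):
    A = np.hstack([hand_xy, np.ones((len(hand_xy), 1))])
    return A @ w
```

A new gesture would then be classified with `classify_gesture(models, features)`; the abstract describes the handling of the train/test frame-rate mismatch only at a high level, so it is not modeled here.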

Similar resources

Neural Network Performance Analysis for Real Time Hand Gesture Tracking Based on Hu Moment and Hybrid Features

This paper presents a comparison study between multilayer perceptron (MLP) and radial basis function (RBF) neural networks, trained with supervised learning and the backpropagation algorithm, for tracking hand gestures. Both networks have two output classes, hand and face. Skin is detected by a region-based algorithm in the image, and then the networks are applied to video sequences frame by frame in...
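
For illustration only, a minimal sketch of the kind of pipeline that entry describes: Hu-moment features computed from a binary skin mask and fed to an MLP classifier. The feature choice, the hand/face labels, and the training data are assumptions rather than that paper's actual setup; an RBF network comparison would follow the same pattern with a different model.

```python
# Sketch only: Hu-moment features + MLP for hand-vs-face classification.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier


def hu_features(mask):
    """Log-scaled Hu moments of a binary skin mask (assumed feature choice)."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)


def train_mlp(X, y):
    """X: stacked hu_features rows; y: 0 = face, 1 = hand (hypothetical data)."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(X, y)  # trained by backpropagation, as in the comparison above
    return clf
```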

The Coordinate Systems Used in Visual Tracking

Tracking moving objects is a fundamental attentional operation. Here we ask which coordinate system is used to track objects: retinal (retinotopic), scene-centered (allocentric), or both? Observers tracked three of six disks that were confined to move within an imaginary square. By moving either the imaginary square (and thus the disks contained within), the fixation cross, or both, we could dr...

Unpacking the Ontogeny of Gesture Understanding: How Movement Becomes Meaningful Across Development.

Gestures, hand movements that accompany speech, affect children's learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects ...

Causal Analysis for Visual Gesture Understanding

We are exploring the use of high-level knowledge about bodies in the visual understanding of gesture. Our hypothesis is that many gestures are metaphorically derived from the motor programs of our everyday interactions with objects and people. For example, many dismissive gestures look like an imaginary object is being brushed or tossed away. At the discourse level, this implicit mass represent...

Navigating a 3D virtual environment of learning objects by hand gestures

This paper presents a gesture-based Human-Computer Interface (HCI) to navigate a learning object repository mapped in a 3D virtual environment. With this interface, the user can access the learning objects by controlling an avatar car using gestures. Haar-like features and the AdaBoost learning algorithm are used for gesture recognition to achieve real-time performance and high recognit...
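
Again for illustration only, the snippet below shows the standard OpenCV way of running a boosted Haar cascade of the kind that entry mentions on webcam frames. The hand-gesture cascade file name is hypothetical (OpenCV ships only face and eye cascades), and mapping detections to navigation commands is left out.

```python
# Sketch only: running an AdaBoost-trained Haar cascade on webcam frames.
import cv2

detector = cv2.CascadeClassifier("hand_gesture_cascade.xml")  # hypothetical cascade file

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale evaluates Haar features over a sliding window at multiple scales
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("gestures", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```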

Journal title:

Volume:   Issue:

Pages:  -

Publication date: 2010